Results 1 - 6 of 6
1.
NPJ Digit Med ; 5(1): 130, 2022 Sep 01.
Article in English | MEDLINE | ID: covidwho-2008331

ABSTRACT

Mass surveillance testing can help control outbreaks of infectious diseases such as COVID-19. However, diagnostic test shortages are prevalent globally and continue to occur in the US with the onset of new COVID-19 variants and emerging diseases like monkeypox, underscoring an urgent need to improve current methods for mass surveillance testing. By targeting surveillance testing toward individuals who are most likely to be infected and, thus, increasing the testing positivity rate (i.e., percent positive in the surveillance group), fewer tests are needed to capture the same number of positive cases. Here, we developed an Intelligent Testing Allocation (ITA) method by leveraging data from the CovIdentify study (6765 participants) and the MyPHD study (8580 participants), including smartwatch data from 1265 individuals, of whom 126 tested positive for COVID-19. Our model and parameter search uncovered the optimal time periods and aggregate metrics for monitoring continuous digital biomarkers to increase the positivity rate of COVID-19 diagnostic testing. We found that resting heart rate (RHR) features distinguished between COVID-19-positive and -negative cases earlier in the course of the infection than steps features, as early as 10 and 5 days prior to the diagnostic test, respectively. We also found that including steps features increased the area under the receiver operating characteristic curve (AUC-ROC) by 7-11% compared with RHR features alone, while including RHR features improved the area under the ITA model's precision-recall curve (AUC-PR) by 38-50% compared with steps features alone. The best AUC-ROC (0.73 ± 0.14 and 0.77 on the cross-validated training set and independent test set, respectively) and AUC-PR (0.55 ± 0.21 and 0.24) were achieved using data from a single device type (Fitbit) with high-resolution (minute-level) data. Finally, we show that ITA generates up to a 6.5-fold increase in the positivity rate on the cross-validated training set and up to a 4.5-fold increase on the independent test set, including both symptomatic and asymptomatic (up to 27%) individuals. Our findings suggest that, if deployed at scale and without requiring self-reported symptoms, the ITA method could improve the allocation of diagnostic testing resources and reduce the burden of test shortages.
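The allocation idea in this abstract reduces to a simple, testable computation: score each individual with a model's predicted infection probability, send the limited tests to the highest-scoring people, and compare the resulting positivity rate against the population prevalence. The sketch below illustrates that logic in Python on synthetic data; the function name, the score construction, and all numbers are illustrative assumptions, not the published ITA implementation.

    import numpy as np

    def positivity_gain(scores, labels, budget):
        """Fold increase in test positivity when tests go to the
        highest-risk individuals instead of being allocated at random.

        scores: predicted infection probabilities, one per person
        labels: 1 if the person is truly infected, else 0
        budget: fraction of the population we can afford to test
        """
        n_tests = max(1, int(budget * len(scores)))
        # Send the available tests to the n_tests highest-scoring people.
        top = np.argsort(scores)[::-1][:n_tests]
        targeted_rate = labels[top].mean()
        # Random allocation converges to the overall prevalence.
        baseline_rate = labels.mean()
        return targeted_rate / baseline_rate

    # Toy data: 1000 people, ~5% prevalence, weakly informative scores.
    rng = np.random.default_rng(0)
    labels = (rng.random(1000) < 0.05).astype(int)
    scores = 0.3 * labels + 0.7 * rng.random(1000)
    print(f"fold increase at 10% budget: {positivity_gain(scores, labels, 0.1):.2f}")

On synthetic data the fold increase depends entirely on how informative the scores are; the 4.5- to 6.5-fold gains reported in the abstract correspond to the study's fitted wearable-based models.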

2.
JMIR Mhealth Uhealth ; 9(2): e24570, 2021 02 03.
Article in English | MEDLINE | ID: covidwho-1933473

ABSTRACT

BACKGROUND: The field of digital medicine has seen rapid growth over the past decade. With this unfettered growth, challenges surrounding interoperability have emerged as a critical barrier to translating digital medicine into practice. To mitigate these challenges in digital medicine research and practice, the community must understand the landscape of digital medicine professionals, which tools are being used and how, and user perspectives on current challenges in the field. OBJECTIVE: The primary objective of this study is to provide information to the digital medicine community that is working to establish frameworks and best practices for interoperability in digital medicine. We sought to learn about the background of digital medicine professionals and determine which sensors and file types are being used most commonly in digital medicine research. We also sought to understand perspectives on digital medicine interoperability. METHODS: We used a web-based survey to query a total of 56 digital medicine professionals from May 1, 2020, to July 10, 2020, on their educational and work experience; the sensors, file types, and toolkits they use professionally; and their perspectives on interoperability in digital medicine. RESULTS: We determined that the digital medicine community comes from diverse educational backgrounds and uses a variety of sensors and file types. Sensors measuring physical activity and the cardiovascular system are the most frequently used, and smartphones continue to be the dominant source of digital health information collection in the digital medicine community. We show that there is no general consensus on file types in digital medicine, and data are currently handled in multiple ways. There is consensus that interoperability is a critical impediment in digital medicine, with 93% (52/56) of survey respondents in agreement. However, only 36% (20/56) of respondents currently use tools for interoperability in digital medicine. We identified three key interoperability needs to be met: integration with electronic health records, implementation of standard data schemas, and standard and verifiable methods for digital medicine research. We show that digital medicine professionals are eager to adopt new tools to solve interoperability problems, and we suggest tools to support digital medicine interoperability. CONCLUSIONS: Understanding the digital medicine community, the sensors and file types they use, and their perspectives on interoperability will enable the development and implementation of solutions that fill critical interoperability gaps in digital medicine. The challenges to interoperability outlined by this study will drive the next steps in creating an interoperable digital medicine community. Establishing best practices to address these challenges and employing platforms for digital medicine interoperability will be essential to furthering the field of digital medicine.


Subject(s)
Electronic Health Records , Smartphone , Humans , Surveys and Questionnaires
3.
Annu Rev Biomed Eng ; 24: 1-27, 2022 06 06.
Article in English | MEDLINE | ID: covidwho-1593636

ABSTRACT

Mounting clinical evidence suggests that viral infections can lead to detectable changes in an individual's normal physiologic and behavioral metrics, including heart and respiration rates, heart rate variability, temperature, activity, and sleep prior to symptom onset, potentially even in asymptomatic individuals. While the ability of wearable devices to detect viral infections in a real-world setting has yet to be proven, multiple recent studies have established that individual, continuous data from a range of biometric monitoring technologies can be easily acquired and that through the use of machine learning techniques, physiological signals and warning signs can be identified. In this review, we highlight the existing knowledge base supporting the potential for widespread implementation of biometric data to address existing gaps in the diagnosis and treatment of viral illnesses, with a particular focus on the many important lessons learned from the coronavirus disease 2019 pandemic.


Subject(s)
COVID-19 , Wearable Electronic Devices , Biometry , COVID-19/diagnosis , Humans
4.
J Med Internet Res ; 23(9): e29875, 2021 09 15.
Article in English | MEDLINE | ID: covidwho-1443976

ABSTRACT

BACKGROUND: Digital clinical measures collected via various digital sensing technologies such as smartphones, smartwatches, wearables, ingestibles, and implantables are increasingly used by individuals and clinicians to capture health outcomes or behavioral and physiological characteristics of individuals. Although academia is taking an active role in evaluating digital sensing products, academic contributions to advancing the safe, effective, ethical, and equitable use of digital clinical measures are poorly characterized. OBJECTIVE: We performed a systematic review to characterize the nature of academic research on digital clinical measures and to compare and contrast the types of sensors used and the sources of funding support for specific subareas of this research. METHODS: We conducted a PubMed search using a range of search terms to retrieve peer-reviewed articles reporting US-led academic research on digital clinical measures between January 2019 and February 2021. We screened each publication against specific inclusion and exclusion criteria. We then identified and categorized research studies based on the types of academic research, sensors used, and funding sources. Finally, we compared and contrasted the funding support for these specific subareas of research and sensor types. RESULTS: The search retrieved 4240 articles of interest. After screening, 295 articles remained for data extraction and categorization. The top five research subareas included operations research (research analysis; n=225, 76%), analytical validation (n=173, 59%), usability and utility (data visualization; n=123, 42%), verification (n=93, 32%), and clinical validation (n=83, 28%). The three most underrepresented areas of research into digital clinical measures were ethics (n=0, 0%), security (n=1, 0.5%), and data rights and governance (n=1, 0.5%). Movement and activity trackers were the most commonly studied sensor type, and physiological (mechanical) sensors were the least frequently studied. We found that government agencies provide the most funding for research on digital clinical measures (n=192, 65%), followed by independent foundations (n=109, 37%) and industry (n=56, 19%), with the remaining 12% (n=36) of these studies completely unfunded. CONCLUSIONS: Specific subareas of academic research related to digital clinical measures are not keeping pace with the rapid expansion and adoption of digital sensing products. An integrated and coordinated effort is required across academia, academic partners, and academic funders to establish the field of digital clinical measures as an evidence-based field worthy of our trust.


Subject(s)
Delivery of Health Care , Smartphone , Humans
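The retrieval step described in the abstract above (a PubMed query over a fixed date window, followed by screening) is straightforward to script. The sketch below assumes Biopython's Entrez interface and uses a placeholder query and contact address, since the study's actual search terms are not reproduced here.

    from Bio import Entrez

    # NCBI asks for a contact address; this one is a placeholder.
    Entrez.email = "researcher@example.org"

    # Placeholder query: the study's actual search terms are not given
    # in the abstract, only the date window.
    query = (
        '("digital clinical measures" OR wearable OR smartwatch) '
        'AND ("2019/01/01"[PDAT] : "2021/02/28"[PDAT])'
    )

    # Retrieve the matching PubMed IDs for downstream screening.
    handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
    record = Entrez.read(handle)
    handle.close()

    pmids = record["IdList"]
    print(f"{record['Count']} records matched; fetched {len(pmids)} IDs")

Screening against inclusion and exclusion criteria would then proceed over the fetched records, typically after pulling titles and abstracts with Entrez.efetch.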
5.
JAMA Netw Open ; 4(9): e2128534, 2021 09 01.
Article in English | MEDLINE | ID: covidwho-1441922

ABSTRACT

Importance: Currently, there are no presymptomatic screening methods to identify individuals infected with a respiratory virus to prevent disease spread and to predict their trajectory for resource allocation. Objective: To evaluate the feasibility of using noninvasive, wrist-worn wearable biometric monitoring sensors to detect presymptomatic viral infection after exposure and predict infection severity in patients exposed to H1N1 influenza or human rhinovirus. Design, Setting, and Participants: The cohort H1N1 viral challenge study was conducted during 2018; data were collected from September 11, 2017, to May 4, 2018. The cohort rhinovirus challenge study was conducted during 2015; data were collected from September 14 to 21, 2015. A total of 39 adult participants were recruited for the H1N1 challenge study, and 24 adult participants were recruited for the rhinovirus challenge study. Exclusion criteria for both challenges included chronic respiratory illness and high levels of serum antibodies. Participants in the H1N1 challenge study were isolated in a clinic for a minimum of 8 days after inoculation. The rhinovirus challenge took place on a college campus, and participants were not isolated. Exposures: Participants in the H1N1 challenge study were inoculated via intranasal drops of diluted influenza A/California/03/09 (H1N1) virus with a mean count of 10⁶ using the median tissue culture infectious dose (TCID50) assay. Participants in the rhinovirus challenge study were inoculated via intranasal drops of diluted human rhinovirus strain type 16 with a count of 100 using the TCID50 assay. Main Outcomes and Measures: The primary outcome measures included cross-validated performance metrics of random forest models to screen for presymptomatic infection and predict infection severity, including accuracy, precision, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUC). Results: A total of 31 participants with H1N1 (24 men [77.4%]; mean [SD] age, 34.7 [12.3] years) and 18 participants with rhinovirus (11 men [61.1%]; mean [SD] age, 21.7 [3.1] years) were included in the analysis after data preprocessing. Separate H1N1 and rhinovirus detection models, using only data from wearable devices as input, were able to distinguish between infection and noninfection with accuracies of up to 92% for H1N1 (90% precision, 90% sensitivity, 93% specificity, 90% F1 score, and 0.85 [95% CI, 0.70-1.00] AUC) and 88% for rhinovirus (100% precision, 78% sensitivity, 100% specificity, 88% F1 score, and 0.96 [95% CI, 0.85-1.00] AUC). The infection severity prediction model was able to distinguish between mild and moderate infection 24 hours prior to symptom onset with an accuracy of 90% for H1N1 (88% precision, 88% sensitivity, 92% specificity, 88% F1 score, and 0.88 [95% CI, 0.72-1.00] AUC) and 89% for rhinovirus (100% precision, 75% sensitivity, 100% specificity, 86% F1 score, and 0.95 [95% CI, 0.79-1.00] AUC). Conclusions and Relevance: This cohort study suggests that the use of a noninvasive, wrist-worn wearable device to predict an individual's response to viral exposure prior to symptoms is feasible. Harnessing this technology would support early interventions to limit presymptomatic spread of viral respiratory infections, which is timely in the era of COVID-19.


Subject(s)
Biometry/methods , Common Cold/diagnosis , Influenza A Virus, H1N1 Subtype , Influenza, Human/diagnosis , Rhinovirus , Severity of Illness Index , Wearable Electronic Devices , Adult , Area Under Curve , Biological Assay , Biometry/instrumentation , Cohort Studies , Common Cold/virology , Early Diagnosis , Feasibility Studies , Female , Humans , Influenza A Virus, H1N1 Subtype/growth & development , Influenza, Human/virology , Male , Mass Screening , Models, Biological , Rhinovirus/growth & development , Sensitivity and Specificity , Virus Shedding , Young Adult
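The detection models in the abstract above are random forests evaluated with cross-validated classification metrics. A minimal sketch of that setup follows, using scikit-learn on synthetic stand-ins for wearable-derived features; the studies' actual feature engineering, class balance, and hyperparameters are not specified here, so everything below is an illustrative assumption.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_validate

    # Synthetic stand-ins for wearable-derived features; the published
    # feature set is not reproduced in the abstract.
    rng = np.random.default_rng(0)
    n = 200
    y = rng.integers(0, 2, n)  # 0 = uninfected, 1 = infected
    X = np.column_stack([
        60 + 5 * y + rng.normal(0, 3, n),          # resting heart rate rises
        8000 - 2000 * y + rng.normal(0, 1500, n),  # daily steps fall
    ])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_validate(
        clf, X, y, cv=5,
        scoring=["accuracy", "precision", "recall", "roc_auc"],
    )
    for metric in ("accuracy", "precision", "recall", "roc_auc"):
        vals = scores[f"test_{metric}"]
        print(f"{metric}: {vals.mean():.2f} ± {vals.std():.2f}")

Note that recall is the sensitivity reported in the abstract; specificity is not among scikit-learn's named scorers and would be computed per fold from a confusion matrix (it is the recall of the negative class).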
6.
JMIR Mhealth Uhealth ; 8(12): e25137, 2020 12 22.
Article in English | MEDLINE | ID: covidwho-993100

ABSTRACT

Recently, companies such as Apple Inc, Fitbit Inc, and Garmin Ltd have released new wearable blood oxygenation measurement technologies. Although the release of these technologies has great potential for generating health-related information, it is important to acknowledge the repercussions of consumer-targeted biometric monitoring technologies (BioMeTs), which, in practice, are often used for medical decision making. BioMeTs are bodily connected digital medicine products that process data captured by mobile sensors, using algorithms to generate measures of behavioral and physiological function. These BioMeTs span both general wellness products and medical devices, and consumer-targeted BioMeTs intended for general wellness purposes are not required to undergo a standardized and transparent evaluation process for ensuring their quality and accuracy. The combination of product functionality, marketing, and the circumstances of the global SARS-CoV-2 pandemic has inevitably led to the use of consumer-targeted BioMeTs for reporting health-related measurements to drive medical decision making. In this viewpoint, we urge consumer-targeted BioMeT manufacturers to go beyond the bare minimum requirements described in US Food and Drug Administration guidance when releasing information on wellness BioMeTs. We also explore new methods and incentive systems that may result in a clearer public understanding of the performance and intended use of consumer-targeted BioMeTs.


Subject(s)
Fitness Trackers/trends , Pandemics , Wearable Electronic Devices/standards , COVID-19 , Humans , Wearable Electronic Devices/adverse effects